Gambling
The Good Old Days of Sports Gambling
Recent memoirs by the retired bookie Art Manteris and the storied gambler Billy Walters provide a glimpse of an industry in its fledgling form--and a preview of the DraftKings era to come. Las Vegas is no longer the seat of the sportsbook gods. In most states, it's now legal, and extremely popular, to place bets using apps or websites such as FanDuel and DraftKings. From your couch, you can wager on everything from the results of snooker championships to the color of the Gatorade poured over the victorious coach after the Super Bowl. The N.F.L., along with the other major-league American sports associations, has officially partnered with sports-betting sites, and their alliance has proved so lucrative that other industries want in on the action; last month, the Golden Globes made a deal with Polymarket, a predictions-market platform, to encourage wagering (or "trading," if you prefer) on the outcomes of its awards race.
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
- North America > United States > California (0.14)
- North America > United States > New York (0.05)
- (5 more...)
- Leisure & Entertainment > Gambling (1.00)
- Leisure & Entertainment > Sports > Football (0.89)
- Government > Regional Government > North America Government > United States Government (0.47)
Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork
Because competitive subnetworks exist within a dense network, in line with the Lottery Ticket Hypothesis, we introduce a novel neuron-wise task incremental learning method, Data-free Subnetworks (DSN), which aims to enhance elastic knowledge transfer across sequentially arriving tasks. Specifically, DSN transfers knowledge from the previously learned tasks to each newly arriving task by selecting the affiliated weights of a small set of neurons to be activated, including neurons reused from prior tasks, via neuron-wise masks. It also transfers potentially valuable knowledge back to the earlier tasks via data-free replay.
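As a rough illustration of the neuron-wise masking idea, here is a minimal PyTorch sketch, assuming a single linear layer, a hypothetical per-neuron relevance score, and a record of neurons reused from prior tasks; DSN's actual selection rule and its data-free replay are not reproduced here.

import torch

def neuron_mask_for_task(weight: torch.Tensor,
                         neuron_scores: torch.Tensor,
                         k: int,
                         reused_neurons: torch.Tensor) -> torch.Tensor:
    """Build a weight mask that activates the affiliated weights of the top-k
    scored output neurons plus any neurons reused from prior tasks.

    weight:         (out_features, in_features) dense layer weights
    neuron_scores:  (out_features,) hypothetical per-neuron relevance scores
    reused_neurons: (out_features,) boolean flags for neurons kept from prior tasks
    """
    selected = torch.zeros_like(neuron_scores, dtype=torch.bool)
    selected[torch.topk(neuron_scores, k).indices] = True
    selected |= reused_neurons
    # A neuron-wise mask activates every incoming weight of a selected neuron.
    return selected.unsqueeze(1).expand_as(weight).float()

# Toy usage: 8 output neurons, pick 3 new ones, reuse neuron 0 from an old task.
w = torch.randn(8, 16)
scores = torch.rand(8)
reused = torch.zeros(8, dtype=torch.bool)
reused[0] = True
mask = neuron_mask_for_task(w, scores, k=3, reused_neurons=reused)
masked_w = w * mask  # only selected neurons' weights participate in the new task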
Most Activation Functions Can Win the Lottery Without Excessive Depth
The strong lottery ticket hypothesis has highlighted the potential for training deep neural networks by pruning, which has inspired interesting practical and theoretical insights into how neural networks can represent functions. For networks with ReLU activation functions, it has been proven that a target network of depth L can be approximated by a subnetwork of a randomly initialized neural network that has double the target's depth (2L) and is wider by a logarithmic factor. We show that a depth of L+1 is sufficient. This result indicates that we can expect to find lottery tickets at realistic, commonly used depths while only requiring logarithmic overparametrization. Our novel construction approach applies to a large class of activation functions and is not limited to ReLUs. Code is available on Github (RelationalML/LT-existence).
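Stated informally, and with notation of my own rather than the paper's, the existence claim reads as follows, for a target network f of depth L, a tolerance \varepsilon > 0, and a bounded input domain \mathcal{X}:

\[ \exists\; g \subseteq N_{\mathrm{rand}}, \quad \mathrm{depth}(N_{\mathrm{rand}}) = L+1: \qquad \sup_{x \in \mathcal{X}} \lVert f(x) - g(x) \rVert \le \varepsilon, \]

where N_{\mathrm{rand}} is a randomly initialized network that is wider than the target by a logarithmic factor and g is a subnetwork obtained from it purely by pruning; the earlier ReLU-specific construction required depth 2L instead of L+1.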
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former suffers from an extremely large computational cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover their original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as were pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version, for the first time, boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time.
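A minimal sketch of one prune-then-regenerate step as described above, assuming magnitude-based pruning and using random regrowth as a placeholder for GraNet's actual zero-cost regeneration criterion; the function names and the toy sizes are illustrative, not the paper's code.

import torch

def prune_and_regenerate(weight: torch.Tensor,
                         mask: torch.Tensor,
                         n_update: int) -> torch.Tensor:
    """Drop the n_update smallest-magnitude active weights, then re-activate
    the same number of previously inactive connections, so overall sparsity
    stays constant while the connectivity pattern is refreshed."""
    flat_mask = mask.flatten().clone()
    inactive_before = torch.nonzero(flat_mask == 0).flatten()
    # Prune: among active weights, deactivate the n_update smallest magnitudes.
    scores = weight.abs().flatten() + 1e12 * (1.0 - flat_mask)  # push inactive weights out of reach
    drop_idx = torch.topk(scores, n_update, largest=False).indices
    flat_mask[drop_idx] = 0.0
    # Regenerate: re-activate n_update connections that were already inactive.
    pick = torch.randperm(inactive_before.numel())[:n_update]
    flat_mask[inactive_before[pick]] = 1.0
    return flat_mask.view_as(mask)

# Toy usage: a roughly 50%-sparse mask over a 10x10 weight, swapping 5 connections.
w = torch.randn(10, 10)
m = (torch.rand(10, 10) > 0.5).float()
new_m = prune_and_regenerate(w, m, n_update=5)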
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
The lottery ticket hypothesis (LTH) states that learning on a properly pruned network (the winning ticket) has improved test accuracy over the original unpruned network. Although LTH has been justified empirically across a broad range of applications of deep neural networks (DNNs), such as computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required to achieve zero generalization error is proportional to the number of non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are obtained for learning a pruned neural network with one hidden layer, and experimental results are further provided to justify the implications for pruning multi-layer neural networks.
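An informal restatement of the scaling claimed above, with notation of my own (n for the number of training samples, r for the number of non-pruned hidden-layer weights), up to problem-dependent constants:

\[ n_{\text{required}} \;\propto\; r, \]

so a sparser winning ticket needs proportionally fewer samples, and for a fixed n the (accelerated) SGD iterates converge to the desired model faster as r decreases.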
Winning the Lottery by Preserving Network Training Dynamics with Concrete Ticket Search
Arora, Tanay; Teuscher, Christof
The Lottery Ticket Hypothesis asserts the existence of highly sparse, trainable subnetworks ('winning tickets') within dense, randomly initialized neural networks. However, state-of-the-art methods of drawing these tickets, like Lottery Ticket Rewinding (LTR), are computationally prohibitive, while more efficient saliency-based Pruning-at-Initialization (PaI) techniques suffer from a significant accuracy-sparsity trade-off and fail basic sanity checks. In this work, we argue that PaI's reliance on first-order saliency metrics, which ignore inter-weight dependencies, contributes substantially to this performance gap, especially in the sparse regime. To address this, we introduce Concrete Ticket Search (CTS), an algorithm that frames subnetwork discovery as a holistic combinatorial optimization problem. By leveraging a Concrete relaxation of the discrete search space and a novel gradient balancing scheme (GRADBALANCE) to control sparsity, CTS efficiently identifies high-performing subnetworks near initialization without requiring sensitive hyperparameter tuning. Motivated by recent works on lottery ticket training dynamics, we further propose a knowledge distillation-inspired family of pruning objectives, finding that minimizing the reverse Kullback-Leibler divergence between sparse and dense network outputs (CTS-KL) is particularly effective. Experiments on various image classification tasks show that CTS produces subnetworks that robustly pass sanity checks and achieve accuracy comparable to or exceeding LTR, while requiring only a small fraction of the computation. For example, with ResNet-20 on CIFAR-10, it reaches 99.3% sparsity with 74.0% accuracy in 7.9 minutes, while LTR attains the same sparsity with 68.3% accuracy in 95.2 minutes. CTS's subnetworks outperform saliency-based methods across all sparsities, but its advantage over LTR is most pronounced in the highly sparse regime.
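A hedged sketch of two ingredients named above: a Concrete (relaxed Bernoulli) mask over the weights of a toy linear layer, and a reverse-KL objective between sparse and dense outputs. The layer, names, and hyperparameters are illustrative placeholders; the actual CTS procedure and its GRADBALANCE sparsity control are not reproduced here.

import torch
import torch.nn.functional as F

# Dense reference layer, frozen; the mask logits are the only search variables.
dense = torch.nn.Linear(32, 10)
for p in dense.parameters():
    p.requires_grad_(False)
mask_logits = torch.nn.Parameter(torch.zeros_like(dense.weight))
temperature = torch.tensor(0.5)

def sparse_logits(x):
    # Sample a relaxed binary mask from the Concrete (relaxed Bernoulli) distribution;
    # rsample() keeps the sample differentiable with respect to mask_logits.
    mask = torch.distributions.RelaxedBernoulli(temperature, logits=mask_logits).rsample()
    return F.linear(x, dense.weight * mask, dense.bias)

def reverse_kl(x):
    # Reverse KL, KL(sparse || dense): mode-seeking w.r.t. the dense network's outputs.
    log_q = F.log_softmax(sparse_logits(x), dim=-1)    # sparse (trainable via the mask)
    log_p = F.log_softmax(dense(x), dim=-1).detach()   # dense (frozen target)
    return (log_q.exp() * (log_q - log_p)).sum(dim=-1).mean()

# One toy optimization step on the mask logits only.
opt = torch.optim.Adam([mask_logits], lr=1e-2)
opt.zero_grad()
loss = reverse_kl(torch.randn(8, 32))
loss.backward()
opt.step()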
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > Puerto Rico > San Juan > San Juan (0.04)
- Europe > Austria (0.04)
- Research Report (1.00)
- Contests & Prizes (1.00)
- North America > Canada (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Leisure & Entertainment > Gambling (0.65)
- Leisure & Entertainment > Games (0.40)
How generative AI in Arc Raiders started a scrap over the gaming industry's future
Arc Raiders is, by all accounts, a late game-of-the-year contender. Dropped into a multiplayer world overrun with hostile drones and military robots, every human player is at the mercy of the machines - and each other. Can you trust the other raider you've spotted on your way back to humanity's safe haven underground, or will they shoot you and take everything you've just scavenged? Perhaps surprisingly, humanity is (mostly) choosing to band together, according to most people I've talked to about this game.
- North America > United States (0.47)
- Oceania > Australia (0.04)
- Europe > Ukraine (0.04)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Leisure & Entertainment > Gambling (1.00)
- Leisure & Entertainment > Sports (0.95)
- Government > Regional Government > North America Government > United States Government (0.47)
- Research Report (0.97)
- Contests & Prizes (0.94)
- Africa > Ethiopia > Addis Ababa > Addis Ababa (0.05)
- North America > Canada > British Columbia > Vancouver (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (12 more...)